Similar literature
20 similar articles found
1.
Scientists have tried to capture the rich cognitive life of dolphins through field and laboratory studies of their brain anatomy, social lives, communication and perceptual abilities. Encephalization quotient data suggest a level of intelligence or cognitive processing in the large-brained dolphin that is closer to the human range than are our nearest primate relatives. Field studies indicate a fission-fusion type of social structure, showing social complexity rivaling that found in chimpanzee societies. Notably, cetaceans are the only mammals other than humans that clearly demonstrate vocal learning, and parallels in the stages of vocal learning have been reported for humans, birds and dolphins. The dolphin's vocal plasticity from infancy through adulthood, in what is probably an 'open' communication system, is likely to be related to their fission-fusion social structure and, specifically, to the fluidity of their short-term associations. However, conflicting evidence exists on the composition and organization of the dolphin's whistle repertoire. In general, the level of dolphin performance on complex auditory learning and memory tasks has been compared with that of primates on similar visual tasks; however, dolphins have also demonstrated sophisticated visual processing abilities. Laboratory studies have also provided suggestive evidence of mirror self-recognition in the dolphin, an ability previously thought to be exclusive to humans and apes.
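
For context on the encephalization-quotient claim above: EQ is the ratio of an animal's observed brain mass to the brain mass expected for its body mass. One common parameterization, Jerison's mammalian regression, is sketched below; the article may use a different fit, so treat the constant and exponent as an assumption.

```latex
\mathrm{EQ} = \frac{E_{\text{obs}}}{E_{\text{exp}}}, \qquad
E_{\text{exp}} = 0.12\, P^{2/3}
```

where E_obs is measured brain mass and P is body mass in grams. On this scale humans score roughly 7 and bottlenose dolphins roughly 4 to 4.5, well above the great apes (roughly 1.5 to 2.5), which is the comparison the abstract draws.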

2.
Comparative analysis of the gestural communication of our nearest animal relatives, the great apes, implies that humans should have the biological potential to produce and understand 60–70 gestures, by virtue of shared common descent. These gestures are used intentionally in apes to convey separate requests, rather than as referential items in syntactically structured signals. At present, no such legacy of shared gesture has been described in humans. We suggest that the fate of “ape gestures” in modern human communication is relevant to the debate regarding the evolution of language through a possible intermediate stage of gestural protolanguage.

3.
Echolocating bottlenose dolphins (Tursiops truncatus) discriminate between objects on the basis of the echoes reflected by the objects. However, it is not clear which echo features are important for object discrimination. To gain insight into the salient features, the authors had a dolphin perform a match-to-sample task and then presented human listeners with echoes from the same objects used in the dolphin's task. In 2 experiments, human listeners performed as well or better than the dolphin at discriminating objects, and they reported the salient acoustic cues. The error patterns of the humans and the dolphin were compared to determine which acoustic features were likely to have been used by the dolphin. The results indicate that the dolphin did not appear to use overall echo amplitude, but that it attended to the pattern of changes in the echoes across different object orientations. Human listeners can quickly identify salient combinations of echo features that permit object discrimination, which can be used to generate hypotheses that can be tested using dolphins as subjects.
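
A hedged sketch of the error-pattern comparison described above: if each subject's match-to-sample trials are tallied into a confusion matrix, the off-diagonal (error) cells of the dolphin's and the human listeners' matrices can be correlated to ask whether both confuse the same object pairs. The matrices and the use of a Pearson correlation here are illustrative assumptions, not the study's data or analysis.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative confusion matrices (rows = sample object, columns = chosen object).
# These counts are invented placeholders, not data from the study.
dolphin = np.array([[40, 6, 4],
                    [8, 38, 4],
                    [3, 5, 42]])
human = np.array([[45, 4, 1],
                  [6, 42, 2],
                  [2, 3, 45]])

def error_profile(cm: np.ndarray) -> np.ndarray:
    """Row-normalise a confusion matrix and return its off-diagonal (error) cells."""
    props = cm / cm.sum(axis=1, keepdims=True)   # counts -> proportions per sample
    mask = ~np.eye(cm.shape[0], dtype=bool)      # keep only the confusions
    return props[mask]

r, p = pearsonr(error_profile(dolphin), error_profile(human))
print(f"error-pattern correlation: r = {r:.2f}, p = {p:.3f}")
```

A high correlation would suggest the dolphin and the listeners relied on similar echo features; a low one, that they attended to different cues.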

4.
This study aimed to determine whether the recall of gestures in working memory could be enhanced by verbal or gestural strategies. We also attempted to examine whether these strategies could help resist verbal or gestural interference. Fifty-four participants were divided into three groups according to the content of the training session. This included a control group, a verbal strategy group (where gestures were associated with labels) and a gestural strategy group (where participants repeated gestures and were told to imagine reproducing the movements). During the experiment, the participants had to reproduce a series of gestures under three conditions: “no interference”, gestural interference (gestural suppression) and verbal interference (articulatory suppression). The results showed that task performance was enhanced in the verbal strategy group, but there was no significant difference between the gestural strategy and control groups. Moreover, compared to the “no interference” condition, performance decreased in the presence of gestural interference, except within the verbal strategy group. Finally, verbal interference hindered performance in all groups. The discussion focuses on the use of labels to recall gestures and differentiates the induced strategies from self-initiated strategies.

5.
Spontaneous pointing by bottlenose dolphins (Tursiops truncatus)
Two bottlenose dolphins (Tursiops truncatus) participating in a symbolic communication project spontaneously developed behaviors that resembled pointing and gaze alternation. The dolphins' behavior demonstrated several features reminiscent of referential communicative behavior. It was triadic, involving a signaler, receiver, and referent. It was also indicative, specifying a focus of attention. The dolphins' points were distinct from the act of attending to or acting on objects. Spontaneous dolphin pointing was influenced by the presence of a potential receiver, and the distance between that receiver and the dolphin. These findings suggest that dolphins are capable of producing referential gestures.

6.
According to Wilson and Fox (2007), working memory for gestures has the same characteristics as the phonological loop. The purpose of our research was to determine whether there is a common articulatory loop for verbal and gestural learning. We carried out two double dissociation experiments. The first involved 84 participants who had to reproduce a series of three gestures under three conditions: control, gestural interference (repeated gestures) and verbal interference (repeated “blah blah”). A significant difference in performance was observed; gestural interference resulted in the weakest performance, while there was no difference between the verbal interference condition and the control group. The second experiment, with 30 participants, involved the memorisation of letters and digits; performance was significantly affected by verbal interference but there was no difference between the gestural interference condition and the control group. The consequences of the dissociations are discussed in relation to Baddeley's (2000) model.

7.
Classical studies on enactment have highlighted the beneficial effects of gestures performed in the encoding phase on memory for words and sentences, for both adults and children. In the present investigation, we focused on the role of enactment for learning from scientific texts among primary-school children. We assumed that enactment would favor the construction of a mental model of the text, and we verified the derived predictions that gestures at the time of encoding would result in greater numbers of correct recollections and discourse-based inferences at recall, as compared to no gestures (Exp. 1), and in a bias to confound paraphrases of the original text with the verbatim text in a recognition test (Exp. 2). The predictions were confirmed; hence, we argue in favor of a theoretical framework that accounts for the beneficial effects of enactment on memory for texts.

8.
A temporal reproduction task is composed of two temporal estimation phases: encoding of the interval to be reproduced, followed by its reproduction. The effect of short-term memory processing on each of these phases was tested in two experiments. In Exp. 1, a memory set was presented, followed by two successive tones bounding the target interval to be reproduced. During the reproduction of the target interval, a probe was presented, and the subject ended the reproduction by pressing one of two keys, depending on the presence or absence of the probe in the memory set. In Exp. 2, probe recognition was required during the encoding of the interval to be reproduced. Whereas in Exp. 1 reproductions lengthened as a function of memory-set size, in Exp. 2 temporal reproductions decreased with set size. These results support attentional models of time estimation and suggest that short-term memory processing interrupts concurrent accumulation of temporal information.
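
The attentional models invoked above are usually formalized as pacemaker-accumulator models in which attention gates pulses into the accumulator; the schematic equation below is our gloss, not the authors' own formula.

```latex
N(t) = \lambda \, a \, t, \qquad 0 \le a \le 1
```

Here lambda is the pacemaker rate, t is physical time, and a is the share of attention allocated to timing. A concurrent memory load lowers a: during encoding, fewer pulses are stored, so the interval is remembered as shorter and reproductions shrink (Exp. 2); during reproduction, pulses accumulate more slowly, so it takes longer to reach the stored count and reproductions lengthen (Exp. 1).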

9.
The purpose of this study was to examine whether apraxic-aphasic patients with parietal lesions had difficulty learning lists of gestures and whether the performance deficits they displayed resulted from an inability either to consolidate this information in memory or to retrieve the information once stored. The findings indicate that apraxic-aphasic patients do have difficulty acquiring lists of gestures. This inability to reproduce gestural information was not associated with a retrieval disorder, but instead the apraxic-aphasic subjects could not consolidate the information in memory.

10.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.
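
Noise-vocoded speech, the degradation used above, splits the waveform into frequency bands, extracts each band's amplitude envelope, and uses the envelopes to modulate band-limited noise. Below is a minimal 6-band sketch assuming NumPy/SciPy, a mono float signal, and a sample rate above twice the top band edge; the band edges and envelope cutoff are illustrative, not the study's stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal: np.ndarray, fs: float, n_bands: int = 6,
                 f_lo: float = 80.0, f_hi: float = 8000.0,
                 env_cut: float = 30.0) -> np.ndarray:
    """Return an n_bands noise-vocoded copy of a mono float signal (requires fs > 2*f_hi)."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(signal))
    env_sos = butter(4, env_cut, btype="low", fs=fs, output="sos")
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)                # band-limit the speech
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))   # smoothed Hilbert envelope
        carrier = sosfiltfilt(band_sos, noise)              # band-limited noise carrier
        out += np.clip(env, 0.0, None) * carrier            # modulate and sum
    return out / np.max(np.abs(out))                        # normalise to unit peak
```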

11.
We assessed how water rescue dogs, which were equally accustomed to respond to gestural and verbal requests, weighted gestural versus verbal information when asked by their owner to perform an action. Dogs were asked to perform four different actions (“sit”, “lie down”, “stay”, “come”) providing them with a single source of information (in Phase 1, gestural, and in Phase 2, verbal) or with incongruent information (in Phase 3, gestural and verbal commands referred to two different actions). In Phases 1 and 2, we recorded the frequency of correct responses as 0 or 1, whereas in Phase 3, we computed a ‘preference index’ (percentage of gestural commands followed over the total commands responded). Results showed that dogs followed gestures significantly better than words when these two types of information were used separately. Females were more likely to respond to gestural than verbal commands and males responded to verbal commands significantly better than females. In the incongruent condition, when gestures and words simultaneously indicated two different actions, the dogs overall preferred to execute the action required by the gesture rather than that required verbally, except when the verbal command “come” was paired with the gestural command “stay” with the owner moving away from the dog. Our data suggest that in dogs accustomed to respond to both gestural and verbal requests, gestures are more salient than words. However, dogs’ responses appeared to be dependent also on the contextual situation: dogs’ motivation to maintain proximity with an owner who was moving away could have led them to make the more ‘convenient’ choices between the two incongruent instructions.
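
Written out, the 'preference index' defined parenthetically above is:

```latex
\text{preference index} = \frac{\text{commands on which the gesture was followed}}{\text{total commands responded to}} \times 100\%
```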

12.
The present study assessed how dogs weigh gestural versus verbal information communicated to them by humans in transitive actions. The dogs were trained by their owners to fetch an object under three conditions: a bimodal congruent condition, using gestures and voice simultaneously; a unimodal gestural condition, using only gestures; and a unimodal verbal condition, using only voice. An additional condition, defined as a bimodal incongruent condition, was later added, in which the gesture contrasted with the verbal command; that is, the owner indicated one object while pronouncing the name of another object visible to the dog. In the incongruent condition, seven out of nine dogs chose to follow the gestural indication and performed above chance, two were at chance, whereas none of the dogs followed the verbal cues above chance. As a group, the dogs followed the gestural command in 73.6% of cases, above chance. The analysis of latencies across the four conditions revealed significant differences: the unimodal verbal and gestural conditions were slower than both the bimodal incongruent and congruent conditions. No statistical differences were observed between the two unimodal conditions or between the two bimodal conditions. Our results demonstrate that dogs trained to respond equally well to gestural and verbal commands choose, to a significant extent, to follow the gestural rather than the verbal command in transitive actions. Furthermore, responses in the bimodal conditions were quicker than in the unimodal ones.
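
As a hedged illustration of the 'above chance' criterion: with two objects to choose from, chance is p = 0.5, and an individual dog's incongruent-condition choices can be assessed with an exact binomial test. The trial counts below are invented for illustration; the abstract does not report per-dog trial numbers.

```python
from scipy.stats import binomtest

# Hypothetical dog: follows the gesture on 17 of 20 incongruent trials.
# With two objects, chance performance is p = 0.5.
result = binomtest(k=17, n=20, p=0.5, alternative="greater")
print(f"gesture followed on 17/20 trials: p = {result.pvalue:.4f}")  # ~0.0013, above chance
```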

13.
Two experiments using a short-term memory paradigm examined the influence of shifts in the starting position on the reproduction of kinesthetic location (Exp. 1) and of distance cues (Exp. 2). We assessed possible causes of the systematic pattern of undershooting and overshooting as related to the shift in the starting position. In each experiment, two groups of 10 students were given 25 trials, each comprising criterion and reproduction tasks involving linear-positioning movements with a 10-sec. retention interval. Each experiment had two independent variables: the group of subjects and the shift in the starting position. The two groups differed in the possible sources of information, the distance moved (Exp. 1) or the end-location (Exp. 2), which were assumed to cause undershooting and overshooting during reproduction. Analysis showed that information about the distance moved may produce undershooting and overshooting in reproduction of the end-location (Exp. 1). Likewise, information about the end-location may produce undershooting and overshooting in reproduction of the distance moved (Exp. 2). The findings provide further evidence of interference between location and distance cues in motor short-term memory.

14.
The purpose of the present study was to examine changes in the tickle sensation as a function of the direction of attention to one's own body (Exp. 1) and the expectation induced by the experimenter's gestures (Exp. 2). In Exp. 1, for 15 subjects tickled on the soles of their own feet, the tickle sensation was not significantly changed by attending to the stimulus or to one's own sole. These results suggested the importance of instructions. In Exp. 2, 6 subjects were tickled and were required to report their experience while they watched the experimenter's gestures. Although the subjects' sensations were unaffected by watching the gestures, the tactile stimulus elicited a tickle sensation. From these results, quantitative and qualitative differences in subjects' tickle sensation may be identified.

15.
Social groups of gorillas were observed in three captive facilities and one African field site. Cases of potential gesture use, totalling 9,540, were filtered by strict criteria for intentionality, giving a corpus of 5,250 instances of intentional gesture use. This indicated a repertoire of 102 gesture types. Most repertoire differences between individuals and sites were explicable as a consequence of environmental affordances and sampling effects: overall gesture frequency was a good predictor of universality of occurrence. Only one gesture was idiosyncratic to a single individual, and was given only to humans. Indications of cultural learning were few, though not absent. Six gestures appeared to be traditions within single social groups, but overall concordance in repertoires was almost as high between as within social groups. No support was found for the ontogenetic ritualization hypothesis as the chief means of acquisition of gestures. Many gestures whose form ruled out such an origin, i.e. gestures derived from species-typical displays, were used as intentionally and almost as flexibly as gestures whose form was consistent with learning by ritualization. When using both classes of gesture, gorillas paid specific attention to the attentional state of their audience. Thus, it would be unwarranted to divide ape gestural repertoires into ‘innate, species-typical, inflexible reactions’ and ‘individually learned, intentional, flexible communication’. We conclude that gorilla gestural communication is based on a species-typical repertoire, like those of most other mammalian species but very much larger. Gorilla gestures are not, however, inflexible signals but are employed for intentional communication to specific individuals.
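
'Concordance in repertoires' above refers to overlap between individuals' sets of gesture types. One simple way to quantify it, sketched here as our own illustration rather than the authors' index, is the Dice coefficient between two repertoires.

```python
def dice(repertoire_a: set[str], repertoire_b: set[str]) -> float:
    """Dice similarity between two gesture repertoires (1.0 = identical sets)."""
    if not repertoire_a and not repertoire_b:
        return 1.0
    return 2 * len(repertoire_a & repertoire_b) / (len(repertoire_a) + len(repertoire_b))

# Hypothetical repertoires for two gorillas:
print(dice({"chest beat", "arm swing", "ground slap"},
           {"chest beat", "ground slap", "head nod"}))  # -> 0.666...
```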

16.
Gesture and early bilingual development
The relationship between speech and gestural proficiency was investigated longitudinally (from 2 years to 3 years 6 months, at 6-month intervals) in 5 French-English bilingual boys with varying proficiency in their 2 languages. Because of their different levels of proficiency in the 2 languages at the same age, these children's data were used to examine the relative contribution of language and cognitive development to gestural development. In terms of rate of gesture production, rate of gesture production with speech, and meaning of gesture and speech, the children used gestures much like adults from 2 years on. In contrast, the use of iconic and beat gestures showed differential development in the children's 2 languages as a function of mean length of utterance. These data suggest that the development of these kinds of gestures may be more closely linked to language development than other kinds (such as points). Reasons why this might be so are discussed.
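
Mean length of utterance (MLU), the proficiency measure referenced above, is conventionally computed over a speech sample as:

```latex
\mathrm{MLU} = \frac{\text{total morphemes produced}}{\text{number of utterances}}
```

A higher MLU in one of a bilingual child's languages indicates greater grammatical development in that language, which is how the measure serves here to dissociate language-specific from general cognitive development.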

17.
Consecutive search for different targets in the same display is supported by a short-term memory mechanism: Distractors that have recently been inspected in the first search are found more quickly in the second search when they become the target (Exp. 1). Here, we investigated the properties of this memory process. We found that this recency advantage is robust to a delay between the two searches (Exp. 2) and that it is only slightly disrupted by an interference task between the two searches (Exp. 3). Introducing a concurrent secondary task (Exp. 4) showed that the memory representations formed in the first search are based on identity as well as location information. Together, these findings show that the short-term memory that supports repeated visual search stores a complex combination of item identity and location that is robust to disruption by either time or interference.

18.
In the early stages of word learning, children demonstrate considerable flexibility in the type of symbols they will accept as object labels. However, around the 2nd year, as children continue to gain language experience, they become focused on more conventional symbols (e.g., words) as opposed to less conventional symbols (e.g., gestures). During this period of symbolic narrowing, the degree to which children are able to learn other types of labels, such as arbitrary gestures, remains a topic of debate. Thus, the purpose of the current set of experiments was to determine whether a multimodal label (word + gesture) could facilitate 26-month-olds' ability to learn an arbitrary gestural label. We hypothesized that the multimodal label would exploit children's focus on words thereby increasing their willingness to interpret the gestural label. To test this hypothesis, we conducted two experiments. In Experiment 1, 26-month-olds were trained with a multimodal label (word + gesture) and tested on their ability to map and generalize both the arbitrary gesture and the multimodal label to familiar and novel objects. In Experiment 2, 26-month-olds were trained and tested with only the gestural label. The findings revealed that 26-month-olds are able to map and generalize an arbitrary gesture when it is presented multimodally with a word, but not when it is presented in isolation. Furthermore, children's ability to learn the gestural labels was positively related to their reported productive vocabulary, providing additional evidence that children's focus on words actually helped, not hindered, their gesture learning.

19.
The authors tested whether the understanding by dolphins (Tursiops truncatus) of human pointing and head-gazing cues extends to knowing the identity of an indicated object as well as its location. In Experiment 1, the dolphins Phoenix and Akeakamai processed the identity of a cued object (of 2 that were present), as shown by their success in selecting a matching object from among 2 alternatives remotely located. Phoenix was errorless on first trials in this task. In Experiment 2, Phoenix reliably responded to a cued object in alternate ways, either by matching it or by acting directly on it, with each type of response signaled by a distinct gestural command given after the indicative cue. She never confused matching and acting. In Experiment 3, Akeakamai was able to process the geometry of pointing cues (but not head-gazing cues), as revealed by her errorless responses to either a proximal or distal object simultaneously present, when each object was indicated only by the angle at which the informant pointed. The overall results establish that these dolphins could identify, through indicative cues alone, what a human is attending to as well as where.

20.
Coarticulatory acoustic variation is presumed to be caused by temporally overlapping linguistically significant gestures of the vocal tract. The complex acoustic consequences of such gestures can be hypothesized to specify them without recourse to context-sensitive representations of phonetic segments. When the consequences of separate gestures converge on a common acoustic dimension (e.g., fundamental frequency), perceptual parsing of the acoustic consequences of overlapping spoken gestures, rather than associations of acoustic features, is required to resolve the distinct gestural events. Direct tests of this theory were conducted. These tests revealed mutual influences of (1) fundamental frequency during a vowel on prior consonant perception, and (2) consonant identity on following vowel stress and pitch perception. The results of these converging tests lead to the conclusion that speech perception involves a process in which acoustic information for coarticulated gestures is parsed from the stream of speech.
