Similar literature
20 similar records found (search time: 171 ms)
1.
2.
Recently, we reported that discrete (4-sec) olfactory cues paired with footshock serve as effective conditioned stimuli (CSs) for potentiating the acoustic startle response in rats in the fear-potentiated startle paradigm. Because odors are such salient cues for the rat, and because of the robust olfactory conditioning observed previously, the current studies investigated second-order fear conditioning using olfactory and visual cues. In Experiments 1 and 2, we used a small number of first-order and second-order training trials on separate days to investigate second-order fear-potentiated startle. Significant potentiated startle was observed in animals receiving Paired/Paired training in both studies, but surprisingly, control animals in the Unpaired/Paired group (Exp. 1) also showed significant potentiated startle to a light S2 at testing. These findings are addressed in the Discussion. Overall, the results of both experiments suggest that olfactory cues serve as efficient S1 and S2 stimuli in second-order fear-potentiated startle paradigms when only a small number of first- and second-order training trials are presented.

3.
It is possible that non-specialised cues transmitted by conspecifics guide animals' food search, provided the animals have the cognitive abilities needed to read these cues. Macaques often check the mouths of their group-mates by olfactory and/or visual inspection. We investigated whether Tonkean macaques (Macaca tonkeana) can find the location of distant food on the basis of cues conveyed by group-mates. The subjects of the study were two 6-year-old males belonging to a social group of Tonkean macaques raised in semi-free-ranging conditions. In a first experiment, we tested whether a subject could choose between two sites after having sniffed a partner that had just eaten food corresponding to one of the sites. We found that both subjects were able to choose the matching site significantly above chance level. This demonstrated that Tonkean macaques are capable of delayed olfactory matching: they could associate a food location with an odour conveyed by a partner. In a second experiment, the same subjects were allowed to see their partner through a Plexiglas window. Both subjects were still able to choose the matching site, demonstrating that they could rely on visual cues alone. Passive recruitment of partners thus appears possible in macaques. They can improve their foraging performance by finding the location of environmental resources from olfactory or visual cues conveyed by group-mates.

4.
Herz RS. Memory & Cognition, 2000, 28(6): 957-964
Two paired-associate memory experiments were conducted to investigate verbal coding in olfactory versus nonolfactory cognition. Experiment 1 examined the effects of switching/not switching odors and visual items to words between encoding and test sessions. Experiment 2 examined switching/not switching perceptual odors and verbal-imagine versions of odors with each other. Experiment 1 showed that memory was impaired for odors but not visual cues when they were switched to their verbal form at test. Experiment 2 revealed that memory was impaired for both odors and verbal-imagine cues when they were switched in format at test and that odor sensory imagery was not accessed by the instruction to imagine a smell. Together, these findings suggest that olfaction is distinguished from other sensory systems by the degree of verbal coding involved in associated cognitive processing.

5.
Two different models (convergent and parallel) potentially describe how recognition memory, the ability to detect the re-occurrence of a stimulus, is organized across different senses. To contrast these two models, rats with or without perirhinal cortex lesions were compared across various conditions that controlled available information from specific sensory modalities. Intact rats not only showed visual, tactile, and olfactory recognition, but also overcame changes in the types of sensory information available between object sampling and subsequent object recognition, e.g., between sampling in the light and recognition in the dark, or vice versa. Perirhinal lesions severely impaired object recognition whenever visual cues were available, but spared olfactory recognition and tactile-based object recognition when tested in the dark. The perirhinal lesions also blocked the ability to recognize an object sampled in the light and then tested for recognition in the dark, or vice versa. The findings reveal parallel recognition systems for different senses reliant on distinct brain areas, e.g., perirhinal cortex for vision, but also show that: (1) recognition memory for multisensory stimuli involves competition between sensory systems and (2) perirhinal cortex lesions produce a bias to rely on vision, despite the presence of intact recognition memory systems serving other senses.

6.
Lampe JF, Andre J. Animal Cognition, 2012, 15(4): 623-630
This study has shown that domestic horses are capable of cross-modal recognition of familiar humans. It was demonstrated that horses are able to discriminate between the voices of a familiar and an unfamiliar human without seeing or smelling them at the same moment. Conversely, they were able to discriminate the same persons when only exposed to their visual and olfactory cues, without being stimulated by their voices. A cross-modal expectancy violation setup was employed; subjects were exposed both to trials with incongruent auditory and visual/olfactory identity cues and to trials with congruent cues. It was found that subjects responded more quickly, for longer, and more often in incongruent trials, exhibiting heightened interest in unmatched cues of identity. This suggests that the equine brain is able to integrate multisensory identity cues from a familiar human into a person representation that allows the brain, when deprived of one or two senses, to maintain recognition of this person.

7.
Many species have been shown to encode multiple sources of information to orient. To examine what kinds of information animals use to locate a goal we manipulated cue rotation, cue availability, and inertial orientation when the food-storing Clark’s nutcracker (Nucifraga columbiana) was searching for a hidden goal in a circular arena. Three groups of birds were used, each with a different goal–landmark distance. As the distance between the goal and the landmark increased, nutcrackers were less accurate in finding the correct direction to the goal than they were at estimating the distance (Experiment 1). To further examine what cues the birds were using to calculate direction, the featural cues within the environment were rotated by 90° and the birds were either oriented when searching (Experiments 2 and 3) or disoriented (Experiment 3). In Experiment 4, all distinctive visual cues were removed (both internal and external to the environment), a novel point of entry was used and the birds were either oriented or disoriented. We found that disorienting the nutcrackers so that they could not use inertial cues did not influence the birds’ total search error. The birds relied heavily but not completely on cues within the environment, as rotating available cues caused them to systematically shift their search behavior. In addition, the birds also relied to some extent on Earth-based cues. These results show the flexible nature of cue use by the Clark’s nutcracker. Our study shows how multiple sources of spatial information may be important for extracting multiple bearings for navigation.

8.
Visual dominance and attention: the Colavita effect revisited
Under many conditions, humans display a robust tendency to rely more on visual information than on other forms of sensory information. Colavita (1974) illustrated this visual dominance effect by showing that naive observers typically fail to respond to clearly suprathreshold tones if these are presented simultaneously with a visual target flash. In the present study, we demonstrate that visual dominance influences performance under more complex stimulation conditions and address the role played by attention in mediating this effect. In Experiment 1, we show the Colavita effect in the simple speeded detection of line drawings and naturalistic sounds, whereas in Experiment 2 we demonstrate visual dominance when the task targets (auditory, visual, or bimodal combinations) are embedded among continuous streams of irrelevant distractors. In Experiments 3-5, we address the consequences of varying the probability of occurrence of targets in each sensory modality. In Experiment 6, we further investigate the role played by attention on visual dominance by manipulating perceptual load in either the visual or the auditory modality. Our results demonstrate that selective attention to a particular sensory modality can modulate, although not completely reverse, visual dominance as illustrated by the Colavita effect.

9.
A rodent's survival depends upon its ability to perceive odor cues necessary to guide mate selection, sexual behavior, foraging, territorial formation, and predator avoidance. Arguably, the need to discriminate odor cues in a complex olfactory environment requires a highly adaptable olfactory system. Indeed, it has been proposed that context-dependent modulation of the initial sensory relay could alter olfactory perception. Interestingly, 40% of the adrenergic innervation from the locus coeruleus, fibers that are activated by contextual cues, innervates the first relay station in the olfactory system (the main olfactory bulb). Here we utilize restricted pharmacological inhibition of olfactory bulb noradrenergic receptors in awake-behaving animals. We show that combined blockade of alpha and beta adrenergic receptors does not impair two-odor discrimination behavior per se but does impair the ability to discriminate perceptually similar odors. Thus, contextual cues conveyed by noradrenergic fibers alter processing before the second synapse in the olfactory cortex, resulting in tuning of the ability to discriminate between similar odors.

10.
The ability to recognise kin has been demonstrated in several animal species. However, the mechanisms of kin recognition often remain unknown. The most frequently discussed sensory modalities for recognising kin are visual, olfactory and acoustical cues. Three-spined sticklebacks (Gasterosteus aculeatus) are able to differentiate between kin and non-kin when presented with combined visual and olfactory cues. To elucidate which cues they use to recognise kin, female sticklebacks were given the choice between two identical computer animations of courting stickleback males. Next to one animation, water conditioned by a brother was added, while near the other, water from an unrelated male was added. In half of the experiments, the brother was familiar, while in the other half he was unfamiliar to the female. Both scenarios were carried out with both outbred and inbred fish. The results showed that the females adjusted their choice behaviour according to relatedness. Furthermore, they were able to recognise both familiar and unfamiliar brothers. Inbreeding did not affect this ability. Hence, three-spined sticklebacks are able to recognise their relatives using olfactory cues alone. The cognitive mechanisms underlying this ability were independent of familiarity and not impaired by inbreeding.

11.
Attentional priming and visual search in pigeons
Advance information about a target's identity improved visual search efficiency in pigeons. Experiments 1 and 2 compared information supplied by visual cues with information supplied by trial sequences. Reaction times (RTs) were lower when visual cues signaled a single target rather than two. RTs were lower (Experiment 1), or accuracy improved (Experiment 2), when a sequence of trials presented a single target rather than a mixture of two. Experiments 3, 4, and 5 considered the selectivity of visual priming by introducing probe trials that reversed the usual cue-target relationship. RT was higher following such miscues than following the usual one- or two-target cuing relationships (Experiment 3); the miscuing effect persisted over variations in the target's concealment (Experiments 4 and 5), but did not occur when the target was presented alone (Experiment 4). The findings indicate that priming modifies an attentional mechanism and suggest that this effect accounts for search images.

12.
Crossmodal linkage between the olfactory and visual senses is still largely underexplored. In this study, we investigated crossmodal olfactory-visual associations by testing whether and how visual processing of objects is affected by the presence of olfactory cues. To this end, we explored the influence of prior learned associations between an odour (e.g., the odour of orange) and a visual stimulus naturally associated with that odour (a picture of an orange) on the movements of the eyes over a complex scene. Participants were asked to freely explore a photograph containing an odour-related visual cue embedded among other objects while being exposed to the corresponding odour (subjects were unaware of the presence of the odour). Eye movements were recorded to analyse the order and distribution of fixations on each object of the scene. Our data show that the odour-related visual cue was explored faster and for a shorter time in the presence of the congruent odour. These findings suggest that odours can affect visual processing by attracting attention to the possible odour source and by facilitating its identification.

13.
Pilfering corvids use observational spatial memory to accurately locate caches that they have seen another individual make. Accordingly, many corvid cache-protection strategies limit the transfer of visual information to potential thieves. Eurasian jays (Garrulus glandarius) employ strategies that reduce the amount of visual and auditory information that is available to competitors. Here, we test whether or not the jays recall and use both visual and auditory information when pilfering other birds’ caches. When jays had no visual or acoustic information about cache locations, the proportion of available caches that they found did not differ from the proportion expected if jays were searching at random. By contrast, after observing and listening to a conspecific caching in gravel or sand, jays located a greater proportion of caches, searched more frequently in the correct substrate type and searched in fewer empty locations to find the first cache than expected. After only listening to caching in gravel and sand, jays also found a larger proportion of caches and searched in the substrate type where they had heard caching take place more frequently than expected. These experiments demonstrate that Eurasian jays possess observational spatial memory and indicate that pilfering jays may gain information about cache location merely by listening to caching. This is the first evidence that a corvid may use recalled acoustic information to locate and pilfer caches.

14.
The abilities of educable mentally retarded adolescents to encode and retrieve words with semantic and acoustic cues were investigated in a free and cued recall task. On each of three trial blocks, seven groups of subjects were presented 20 unrelated stimulus words. Groups received either semantic, acoustic, or no encoding cues along with the stimuli. Free recall was requested from all subjects, followed immediately by a second period of either free recall or cued recall with the semantic or acoustic cues. Semantic cues were most effective when presented both at encoding and retrieval. The subjects were unable to use acoustic information as effective retrieval aids. Results were discussed in terms of encoding dimension dominance and mediational deficiencies.

15.
The sensorimotor transformations necessary for generating appropriate motor commands depend on both current and previously acquired sensory information. To investigate the relative impact (or weighting) of visual and haptic information about object size during grasping movements, we let normal subjects perform a task in which, unbeknownst to the subjects, the object seen (visual object) and the object grasped (haptic object) were never the same physically. When the haptic object abruptly became larger or smaller than the visual object, subjects in the following trials automatically adapted their maximum grip aperture when reaching for the object. This adaptation was not dependent on conscious processes. We analyzed how visual and haptic information were weighted during the course of sensorimotor adaptation. The adaptation process was quicker and relied more on haptic information when the haptic objects increased in size than when they decreased in size. As such, sensory weighting seemed to be molded to avoid prehension error. We conclude from these results that the impact of a specific source of sensory information on the sensorimotor transformation is regulated to satisfy task requirements.

16.
Recent research on social cognition suggests that lifelike visual and vocal information about a person may strongly mediate the impact of prior social categorical knowledge on social judgements. Other research, however, on the contribution of visual cues to impression formation, suggests that they have relatively little impact. This study sought to resolve these conflicting findings by examining the effect of visual cues on social judgements when subjects possess prior social categorical knowledge varying in salience to the experimental task. Videotaped target interviews were monitored by observers in either sound and vision or sound only, and measures were taken of the targets' perceived personality, their ‘actual’ and ‘predicted’ social performance, and social acceptance by observers. Whilst salience of categorization strongly influenced the quality of judgements, visual cues had little if any effect. However, visual cues strongly influenced subjects' confidence in all three sets of judgements, sound and vision subjects being consistently more confident than their sound only counterparts. The findings are discussed in relation to previous research in both social cognition and visual cues.

17.
This study addressed age distributions and experiential qualities of autobiographical memories evoked by different sensory cues. Ninety-three older adults were presented with one of three cue types (word, picture, or odor) and were asked to relate any autobiographical event for the given cue. The main aims were to explore whether (1) the age distribution of olfactory-evoked memories differs from memories cued by words and pictures and (2) the experiential qualities of the evoked memories vary over the different cues. The results showed that autobiographical memories triggered by olfactory information were older than memories associated with verbal and visual information. Specifically, most odor-cued memories were located in the first decade of life (<10 years), whereas memories associated with verbal and visual cues peaked in early adulthood (11–20 years). Also, odor-evoked memories were associated with stronger feelings of being brought back in time and had been thought of less often than memories evoked by verbal and visual information. This pattern of findings suggests that odor-evoked memories may be different from other memory experiences. This work was supported by a grant from the Swedish Research Council (No. F0647/2001) to M.L.

18.
To locate objects in the environment, animals and humans use visual and nonvisual information. We were interested in children's ability to relocate an object on the basis of self-motion and local and distal color cues for orientation. Five- to 9-year-old children were tested on an object location memory task in which, between presentation and test, the availability of local and distal cues was manipulated. Additionally, participants' viewpoint could be changed. We used a Bayesian model selection approach to compare our hypotheses. We found that, to remain oriented in space, 5-year-olds benefit from visual information in general, 7-year-olds benefit from visual cues when a viewpoint change takes place, and 9-year-olds do not benefit from the availability of visual cues for orientation but rely on self-movement cues instead. Results are discussed in terms of the adaptive combination model (Newcombe & Huttenlocher, 2006).

19.
To determine the role of sensory information in golf putting, 22 subjects were classified as either high or low in skill. Subjects from both groups putted from two distances (5 and 15 ft.) under three different conditions: relevant visual cues (looking at the ball), no visual cues (blindfolded), and irrelevant visual cues (looking at an offset marker). The 2 × 2 × 3 analysis of variance with radial error as the dependent variable indicated significant main effects for each factor but no significant interactions. Relevant visual cues provided greater accuracy than did no visual cues or irrelevant visual cues.

20.
Using featural cues such as colour to identify ephemeral food can increase foraging efficiency. Featural cues may change over time, however; therefore, animals should use spatial cues to relocate food that occurs in a temporally stable position. We tested this hypothesis by measuring the cue preferences of captive greenfinches Carduelis chloris when relocating food hidden in a foraging tray. In these standardised associative learning trials, greenfinches favoured colour cues when returning to a foraging context that they had encountered before only once (“one-trial test”) but switched to spatial cues when they had encountered that scenario on ten previous occasions (“repeated-trial test”). We suggest that repeated encounters generated a context in which individuals had a prior expectation of temporal stability, and hence context-dependent cue selection. Next, we trained birds to find food in the absence of colour cues but tested them in the presence of visual distracters. Birds were able to learn spatial cues after one encounter, but only when visual distracters were identical in colouration. When a colourful distracter was present in the test phase, cue selection was random. Unlike the first one-trial test, birds were not biased towards this colourful visual distracter. Together, these results suggest that greenfinches are able to learn both cue types, that colour cue biases represent learning, not simply distraction, and that spatial cues are favoured over colour cues only in temporally stable contexts.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). ICP licence: 京ICP备09084417号