21.
Ansgar D. Endress, Sarah Carden, Elisabetta Versace, Marc D. Hauser. Animal Cognition (2010) 13(3):483-495
A wide variety of organisms produce actions and signals in particular temporal sequences, including the motor actions recruited
during tool-mediated foraging, the arrangement of notes in the songs of birds, whales and gibbons, and the patterning of words
in human speech. To accurately reproduce such events, the elements that comprise such sequences must be memorized. Both memory
and artificial language learning studies have revealed at least two mechanisms for memorizing sequences, one tracking co-occurrence
statistics among items in sequences (i.e., transitional probabilities), and the other tracking the positions of items in sequences, in particular items at the sequence-edges. The latter mechanism seems to dominate the encoding
of sequences after limited exposure, and to be recruited by a wide array of grammatical phenomena. To assess whether humans
differ from other species in their reliance on one mechanism over the other after limited exposure, we presented chimpanzees
(Pan troglodytes) and human adults with brief exposure to six-item auditory sequences. Each sequence consisted of three distinct sound types
(X, A, B), arranged according to two simple temporal rules: the A item always preceded the B item, and the sequence-edges
were always occupied by the X item. In line with previous results with human adults, both species primarily encoded positional
information from the sequences; that is, they kept track of the items that occurred in the sequence-edges. In contrast, the
sensitivity to co-occurrence statistics was much weaker. Our results suggest that a mechanism to spontaneously encode positional
information from sequences is present in both chimpanzees and humans and may represent the default in the absence of training
and with brief exposure. As many grammatical regularities exhibit properties of this mechanism, it may be recruited by language
and constrain the form that certain grammatical regularities take.
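The two encoding mechanisms contrasted above can be illustrated with a short sketch (hypothetical labels; the exact composition of the six-item sequences is an assumption here, not the study's stimulus set):

```python
import random
from collections import Counter

def make_sequence():
    """One possible six-item sequence obeying the two rules:
    X occupies both edges, and every A precedes every B inside.
    (The mix of A and B items per sequence is an assumption.)"""
    n_a = random.randint(1, 3)
    interior = ['A'] * n_a + ['B'] * (4 - n_a)
    return ['X'] + interior + ['X']

def transitional_probabilities(seqs):
    """Co-occurrence mechanism: P(next item | current item) over adjacent pairs."""
    pair_counts, item_counts = Counter(), Counter()
    for seq in seqs:
        for cur, nxt in zip(seq, seq[1:]):
            pair_counts[(cur, nxt)] += 1
            item_counts[cur] += 1
    return {(cur, nxt): c / item_counts[cur]
            for (cur, nxt), c in pair_counts.items()}

def edge_items(seqs):
    """Positional mechanism: simply record which items occupy the edges."""
    return Counter(s[0] for s in seqs), Counter(s[-1] for s in seqs)

seqs = [make_sequence() for _ in range(1000)]
firsts, lasts = edge_items(seqs)       # always X under the rules above
tp = transitional_probabilities(seqs)  # e.g. P(A | X), P(B | A)
```

A learner relying on the positional mechanism only needs `edge_items`; a learner tracking co-occurrence statistics needs the full transition table.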
22.
Many pre-school children show closing-in behaviour (CIB) in graphic copying tasks: a tendency to place their copy abnormally
close to or even on top of the model. Similar phenomena have been studied in patients with dementia, though it is unclear
whether the superficial similarities between CIB in development and dementia reflect common underlying cognitive mechanisms.
The aim of the present study was to investigate the cognitive functions involved in CIB in pre-school children. Forty-one
children (3–5 years) were assessed for CIB and completed a neuropsychological battery targeting visuospatial abilities, short-term memory (verbal and spatial), and attention (sustained attention, selective attention and attention switching). Binary
logistic regression found that performance on the attention subtests was the best unique predictor of CIB. A second analysis,
in which the three attention subtests were entered as separate predictors, suggested that attention switching ability was
most strongly related to CIB. These results support the view that CIB in children reflects inadequate attentional control.
The convergence of these results with similar observations in patients with dementia further suggests that similar cognitive
factors underlie CIB in these two populations.
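The analysis described above can be sketched as follows (a minimal pure-Python stand-in for the study's binary logistic regression; all numbers are synthetic and purely illustrative, with a single invented "attention" score in place of the subtest battery):

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Fit y ~ sigmoid(w*x + b) by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

random.seed(0)
# Synthetic data: children with lower attention scores show CIB (y = 1)
# more often; the threshold and noise level are invented.
attention = [random.uniform(0, 1) for _ in range(200)]
cib = [1 if a + random.gauss(0, 0.3) < 0.5 else 0 for a in attention]
w, b = fit_logistic(attention, cib)
# Expected sign: negative weight, i.e. better attentional control
# predicts lower odds of closing-in behaviour.
```

In the actual study the predictors were the separate attention subtests; here one scalar predictor stands in for them only to show the shape of the analysis.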
23.
Quantity discrimination is adaptive in a variety of ecological contexts and different taxa discriminate stimuli differing
in numerousness, both in the wild and in laboratory settings. Quantity discrimination between object arrays has been suggested
to be more demanding than between food arrays but, to our knowledge, the same paradigm has never been used to directly compare
them. We investigated to what extent capuchin monkeys’ relative numerousness judgments (RNJs) with food and with tokens are alike.
Tokens are inherently non-valuable objects that acquire an associative value upon exchange with the experimenter. Our aims
were (1) to assess capuchins’ RNJs with food (Experiment 1) and with tokens (Experiment 2) by presenting all the possible
pair-wise choices between one and five items, and (2) to evaluate which of the two proposed non-verbal mechanisms underlying quantity discrimination (the analogue-magnitude or the object-file system) capuchins relied upon. In both conditions capuchins reliably
selected the larger amount of items, although their performance was higher with food than with tokens. The influence of the
ratio between arrays on performance indicates that capuchins relied on the same system for numerical representation, namely
analogue magnitude, regardless of the type of stimuli (food or tokens) and across both the small and large number ranges.
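The ratio signature that distinguishes the analogue-magnitude system can be made concrete with a small sketch (illustrative only; the orderings are predictions of Weber's law, not the study's data):

```python
from itertools import combinations

def ratio(a, b):
    """Smaller/larger ratio: lower values mean easier discrimination
    under an analogue-magnitude (Weber-like) account."""
    return min(a, b) / max(a, b)

# All pair-wise choices between one and five items, as in both experiments.
pairs = list(combinations(range(1, 6), 2))
by_difficulty = sorted(pairs, key=lambda p: ratio(*p))
# Analogue-magnitude signature: 1 vs 5 (ratio 0.20) is predicted to be
# easier than 4 vs 5 (ratio 0.80), with no discontinuity at the
# small/large-number boundary that an object-file account would predict.
```

Performance tracking `ratio` rather than the absolute difference, across both small and large ranges, is the diagnostic reported in the abstract.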
24.
Elisabetta Pastori. Journal of Applied Logic (2012) 10(1):85-91
In this paper we compare two of Hrushovski's constructions which arise in different contexts (Hrushovski, preprint [5]; Hrushovski, 1989 [4]) and we show that under certain hypotheses they coincide. Using a group-theoretic approach we also exhibit a structure which satisfies these hypotheses.
25.
26.
27.
Poor hand-pointing to sounds in right brain-damaged patients: not just a problem of spatial-hearing
We asked 22 right brain-damaged (RBD) patients and 11 elderly healthy controls to perform hand-pointing movements to free-field unseen sounds, while modulating two non-auditory variables: the initial position of the responding hand (left, centre or right) and the presence or absence of task-irrelevant ambient vision. RBD patients suffering from visual neglect, unlike RBD patients without neglect and healthy controls, showed a systematic rightward error in sound localisation, which was modulated by the non-auditory variables. Localisation errors were exacerbated by an initial hand-position to the right of the body-midline, and reduced by the leftward initial hand-position. Moreover, for the visual neglect patients, the mere presence of ambient vision worsened localisation errors. These results demonstrate that although hand-pointing to sounds has often been considered a straightforward approach to investigate sound-localisation abilities in brain-damaged patients, in some patients it may actually reveal localisation deficits that reflect a combination of impaired spatial-hearing and spatial biases from other sensory modalities (i.e., vision and proprioception).
28.
Animal Cognition - The commentary by Gallup and Anderson (Anim Cogn https://doi.org/10.1007/s10071-021-01538-9, 2021) on the original article by Baragli, Scopa, Maglieri, and Palagi (Anim Cogn...
29.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound in one condition in which a neutral visual stimulus was either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias and, hence, perceptual translocation of the auditory stimulus appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold. In this case, the influence of the visual cue was apparent only when the two cues were spatially coincident and resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
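As a rough summary (a toy decision rule, not the authors' model), the two findings can be written out explicitly; the 'supra'/'threshold' labels and the disparity argument are hypothetical inputs:

```python
def predicted_effect(visual_salience, disparity_deg):
    """Toy rule summarizing the reported findings:
    - a supra-threshold visual cue biases auditory localization toward
      its own location at any disparity (visual capture);
    - a threshold-level cue helps only when spatially coincident
      with the sound; otherwise it has no effect."""
    if visual_salience == 'supra':
        return 'bias toward visual cue'
    if disparity_deg == 0:
        return 'enhanced localization'
    return 'no effect'
```

The asymmetry between the two branches is what the abstract describes as salience being the critical factor.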
30.
The perception of tactile stimuli on the face is modulated if subjects concurrently observe a face being touched; this effect is termed "visual remapping of touch" or the VRT effect. Given the high social value of this mechanism, we investigated whether it might be modulated by specific key information processed in face-to-face interactions: facial emotional expression. In two separate experiments, participants received tactile stimuli, near the perceptual threshold, either on their right, left, or both cheeks. Concurrently, they watched several blocks of movies depicting a face with a neutral, happy, or fearful expression that was touched or just approached by human fingers (Experiment 1). Participants were asked to distinguish between unilateral and bilateral felt tactile stimulation. Tactile perception was enhanced when viewing touch toward a fearful face compared with viewing touch toward the other two expressions. In order to test whether this result generalizes to other negative emotions or whether it is a fear-specific effect, we ran a second experiment, in which participants watched movies of faces (touched or approached by fingers) with either a fearful or an angry expression (Experiment 2). In line with the first experiment, tactile perception was enhanced when subjects viewed touch toward a fearful face but not toward an angry face. Results of the present experiments are interpreted in light of the different mechanisms underlying the recognition of different emotions, with a specific involvement of the somatosensory system when viewing a fearful expression and a resulting fear-specific modulation of the VRT effect.