Similar Literature
20 similar documents found (search time: 31 ms)
1.
Using a visual and an acoustic sample set that appeared to favour the auditory modality of the monkey subjects, in Experiment 1 retention gradients generated in closely comparable visual and auditory matching (go/no-go) tasks revealed a more durable short-term memory (STM) for the visual modality. In Experiment 2, potentially interfering visual and acoustic stimuli were introduced during the retention intervals of the auditory matching task. Unlike the case of visual STM, delay-interval visual stimulation did not affect auditory STM. On the other hand, delay-interval music decreased auditory STM, confirming that the monkeys maintained an auditory trace during the retention intervals. Surprisingly, monkey vocalizations injected during the retention intervals caused much less interference than music. This finding, which was confirmed by the results of Experiments 3 and 4, may be due to differential processing of “arbitrary” (the acoustic samples) and species-specific (monkey vocalizations) sounds by the subjects. Although less robust than visual STM, auditory STM was nevertheless substantial, even with retention intervals as long as 32 sec.

2.
Three experiments examined the ability of birds to discriminate between the actions of walking forwards and backwards as demonstrated by video clips of a human walking a dog. Experiment 1 revealed that budgerigars (Melopsittacus undulatus) could discriminate between these actions when the demonstrators moved consistently from left to right. Test trials then revealed that the discrimination transferred, without additional training, to clips of the demonstrators moving from right to left. Experiment 2 replicated the findings from Experiment 1 except that the demonstrators walked as if on a treadmill in the center of the display screen. The results from the first 2 experiments were replicated with pigeons in Experiment 3. The results cannot be explained if it is assumed that animals rely on static cues, such as those derived from individual postures, in order to discriminate between the actions of another animal. Instead, this type of discrimination appears to be controlled by dynamic cues derived from changes in the posture of the demonstrators.

3.
When experiencing aggression from group members, chimpanzees commonly produce screams. These agonistic screams are graded signals and vary acoustically as a function of the severity of aggression the caller is facing. We conducted a series of field playback experiments with a community of wild chimpanzees in the Budongo Forest, Uganda, to determine whether individuals could meaningfully distinguish between screams given in different agonistic contexts. We compared six subjects’ responses to screams given in response to severe and mild aggression. Subjects consistently discriminated between the two scream types. To address the possibility that the response differences were driven directly by the screams’ peripheral acoustic features, rather than any attached social meaning, we also tested the subjects’ responses to tantrum screams. These screams are given by individuals that experienced social frustration, but no physical threat, yet acoustically they are very similar to screams of victims of severe aggression. We found that chimpanzees looked longer at severe victim screams than at either mild victim screams or tantrum screams. Our results indicate that chimpanzees attend to the informational content of screams and are able to distinguish between different scream variants, which form part of a graded continuum.

4.
Similarity and categorization of environmental sounds
Four experiments investigated the acoustical correlates of similarity and categorization judgments of environmental sounds. In Experiment 1, similarity ratings were obtained from pairwise comparisons of recordings of 50 environmental sounds. A three-dimensional multidimensional scaling (MDS) solution showed three distinct clusterings of the sounds, which included harmonic sounds, discrete impact sounds, and continuous sounds. Furthermore, sounds from similar sources tended to be in close proximity to each other in the MDS space. The orderings of the sounds on the individual dimensions of the solution were well predicted by linear combinations of acoustic variables, such as harmonicity, amount of silence, and modulation depth. The orderings of sounds also correlated significantly with MDS solutions for similarity ratings of imagined sounds and for imagined sources of sounds, obtained in Experiments 2 and 3, as was the case for free categorization of the 50 sounds (Experiment 4), although the categorization data were less well predicted by acoustic features than were the similarity data.
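The core idea behind the MDS analyses in this abstract — finding coordinates whose pairwise distances approximate the rated dissimilarities — can be sketched in a few lines. This is a purely illustrative, minimal 1-D gradient-descent version written for this page; the names (`mds_1d`, `stress`) and the toy distance matrix are invented here, and real MDS software uses more robust algorithms (e.g. SMACOF) and higher-dimensional solutions.

```python
def mds_1d(dist, steps=2000, lr=0.01):
    """Fit 1-D coordinates whose pairwise distances approximate `dist`
    by plain gradient descent on the raw stress function."""
    n = len(dist)
    x = [0.1 * i for i in range(n)]  # deterministic, slightly spread start
    for _ in range(steps):
        grad = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = abs(x[i] - x[j]) or 1e-9
                # derivative of (d - target)^2 with respect to x[i]
                grad[i] += 2 * (d - dist[i][j]) * (x[i] - x[j]) / d
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

def stress(x, dist):
    """Sum of squared differences between fitted and target distances."""
    n = len(x)
    return sum((abs(x[i] - x[j]) - dist[i][j]) ** 2
               for i in range(n) for j in range(i + 1, n))

# Three hypothetical "sounds" whose dissimilarities embed exactly on a line.
dist = [[0, 1, 3],
        [1, 0, 2],
        [3, 2, 0]]
coords = mds_1d(dist)
```

After fitting, `stress(coords, dist)` is near zero for this toy matrix; in the study's data, the recovered dimensions were then regressed on acoustic variables such as harmonicity and modulation depth.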

5.
Two experiments investigated whether a species of songbird perceives missing fundamentals in sounds containing complex frequencies. In Experiment 1, European starlings were trained to discriminate between two sinusoids. This discrimination persisted when the sinusoids were replaced with waveforms composed solely of four consecutive higher harmonics of the training frequencies. In Experiment 2, starlings trained to discriminate between two complex frequencies consisting of sets of higher harmonics transferred the discrimination to the sinusoidal fundamentals. The results demonstrate that starlings can perceive harmonic or periodic structure, and show that a species of songbird can use harmonic structure to gain information about its auditory environment. The findings, together with those obtained from fish and mammals, suggest that periodicity pitch perception may be a general process in vertebrate hearing.

6.
Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in 2-year-old children with ASD, matched for nonverbal mental age (NVMA) with developmentally delayed (DD) children, and typically developing children (TD), using the Visual Paired Comparison (VPC) paradigm. Results indicate that, consistent with the other-species effect, TD controls show enhanced recognition of human but not monkey faces; however, neither the ASD nor the DD group show evidence of face recognition regardless of the species. Experiment 2 examines the same question in a group of older 3- to 4-year-old developmentally disabled (ASD and DD) children as well as in typical controls. In this experiment, both human and monkey faces are recognized by all three groups. The results of Experiments 1 and 2 suggest that difficulties in face processing, as measured by the VPC paradigm, are common in toddlers with ASD as well as DD, but that these deficits tend to disappear by early preschool age. In addition, the experiments show that higher efficacy of incidental encoding and recognition of facial identity in a context of passive exposure is positively related to nonverbal cognitive skills and age, but not to overall social interaction skills or greater attention to faces exhibited in naturalistic contexts.

7.
Some nonhuman primates have demonstrated the capacity to communicate about external objects or events, suggesting primate vocalizations can function as referential signals. However, there is little convincing evidence for functionally referential communication in any great ape species. Here, the authors demonstrate that wild chimpanzees (Pan troglodytes schweinfurthii) of Budongo forest, Uganda, give acoustically distinct screams during agonistic interactions depending on the role they play in a conflict. The authors analyzed the acoustic structure of screams of 14 individuals, in the role of both aggressor and victim. The authors found consistent differences in the acoustic structure of the screams, across individuals, depending on the social role the individual played during the conflict. The authors propose that these 2 distinct scream variants, produced by victims and aggressors during agonistic interactions, may be promising candidates for functioning as referential signals.

8.
The authors' goal was to provide a better understanding of the relationship between vocal production and perception in nonhuman primate communication. To this end, the authors examined the cotton-top tamarin's (Saguinus oedipus) combination long call (CLC). In Part 1 of this study, the authors carried out a series of acoustic analyses designed to determine the kind of information potentially encoded in the tamarin's CLC. Using factorial analyses of variance and multiple discriminant analyses, the authors explored whether the CLC encodes 3 types of identity information: individual, sex, and social group. Results revealed that exemplars could be reliably assigned to these 3 functional classes on the basis of a suite of spectrotemporal features. In Part 2 of this study, the authors used a series of habituation-dishabituation playback experiments to test whether tamarins attend to the encoded information about individual identity. The authors first tested for individual discrimination when tamarins were habituated to a series of calls from 1 tamarin and then played back a test call from a novel tamarin; both opposite- and same-sex pairings were tested. Results showed that tamarins dishabituated when caller identity changed but transferred habituation when caller identity was held constant and a new exemplar was played (control condition). Follow-up playback experiments revealed an asymmetry between the authors' acoustic analyses of individual identity and the tamarins' capacity to discriminate among vocal signatures; whereas all colony members have distinctive vocal signatures, not all tamarins were equally discriminable based on the habituation-dishabituation paradigm.

9.
The perception of continuously repeating auditory patterns by European starlings was explored in seven experiments. In Experiment 1, 4 starlings learned to discriminate between two continuously repeating, eight-element, auditory patterns. Each eight-element pattern was constructed from different temporal organizations of two elements differing in timbre. In Experiments 2–7, the repeating patterns were transformed in ways designed to identify the starlings’ perceptual organization of the patterns. In Experiment 2, the starlings identified patterns beginning with novel starting points. In Experiment 3, discrimination performance was adversely affected by reorganizing the elements in the patterns. In Experiments 4 and 5, the pattern elements were altered. In Experiment 4, the patterns were constructed from two novel elements. In Experiment 5, the temporal location of the two pattern elements was reversed. The transformations of the patterns in Experiments 4 and 5 affected discrimination performance for some, but not all, of the starlings. In Experiments 6 and 7, replacing either of the two elements with silent intervals had no effect on discrimination performance. The results of these experiments identify basic grouping principles that starlings use when they perceive auditory patterns.

10.
Complex sounds vary along a number of acoustic dimensions. These dimensions may exhibit correlations that are familiar to listeners due to their frequent occurrence in natural sounds—namely, speech. However, the precise mechanisms that enable the integration of these dimensions are not well understood. In this study, we examined the categorization of novel auditory stimuli that differed in the correlations of their acoustic dimensions, using decision bound theory. Decision bound theory assumes that stimuli are categorized on the basis of either a single dimension (rule based) or the combination of more than one dimension (information integration) and provides tools for assessing successful integration across multiple acoustic dimensions. In two experiments, we manipulated the stimulus distributions such that in Experiment 1, optimal categorization could be accomplished by either a rule-based or an information integration strategy, while in Experiment 2, optimal categorization was possible only by using an information integration strategy. In both experiments, the pattern of results demonstrated that unidimensional strategies were strongly preferred. Listeners focused on the acoustic dimension most closely related to pitch, suggesting that pitch-based categorization was given preference over timbre-based categorization. Importantly, in Experiment 2, listeners also relied on a two-dimensional information integration strategy, if there was immediate feedback. Furthermore, this strategy was used more often for distributions defined by a negative spectral correlation between stimulus dimensions, as compared with distributions with a positive correlation. These results suggest that prior experience with such correlations might shape short-term auditory category learning.
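The two strategy types that decision bound theory distinguishes can be caricatured in a few lines. This is a purely illustrative sketch, not the study's model: the function names, dimension labels, weights, and criteria are all invented here. A rule-based observer thresholds a single dimension, while an information-integration observer thresholds a weighted combination of both.

```python
def rule_based(stim, criterion=0.5):
    """Unidimensional rule: classify on the first dimension only
    (here imagined to be the pitch-related dimension)."""
    return "A" if stim[0] > criterion else "B"

def information_integration(stim, w=(0.5, 0.5), criterion=0.5):
    """Linear decision bound: combine both dimensions
    (pitch-related and timbre-related) before thresholding."""
    return "A" if w[0] * stim[0] + w[1] * stim[1] > criterion else "B"

# A stimulus low on the pitch dimension but high on the timbre dimension:
# the two strategies disagree, which is what lets the theory's model-fitting
# procedure tell them apart from response patterns.
stim = (0.3, 0.9)
```

For distributions like Experiment 2's, only the integration bound separates the categories, so a listener who persists with the unidimensional rule cannot reach optimal accuracy.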

11.
Two experiments investigated the role of verbalization in memory for environmental sounds. Experiment 1 extended earlier research (Bower & Holyoak, 1973) showing that sound recognition is highly dependent upon consistent verbal interpretation at input and test. While such a finding implies an important role for verbalization, Experiment 2 suggested that verbalization is not the only efficacious strategy for encoding environmental sounds. Recognition after presentation of sounds was shown to differ qualitatively from recognition after presentation of sounds accompanied with interpretative verbal labels and from recognition after presentation of verbal labels alone. The results also suggest that encoding physical information about sounds is of greater importance for sound recognition than for verbal free recall, and that verbalization is of greater importance for free recall than for recognition. Several alternative frameworks for the results are presented, and separate retrieval and discrimination processes in recognition are proposed.

12.
If temporal position of a frequency inflection is the most salient communication cue in Japanese macaque smooth early and smooth late high coos, then macaques should perceive coos differing only along the early-late dimension as belonging to different classes. The perceived similarity of synthetic coos and temporally reversed variants was evaluated, using multidimensional scaling of macaque discrimination latencies. Original calls and calls temporally reversed in the frequency domain could be discriminated if the peak was near a call endpoint but not if the frequency peak in the original call was near the coo midpoint. Perceived similarity of such calls was inversely related to the amount of frequency modulation. Temporal reversals of amplitude contours were also conducted. Although macaques are quite sensitive to amplitude increments, reversal of the relatively flat amplitude contours of these calls did not affect discrimination responses.

13.
Origins of number sense: Large-number discrimination in human infants
Four experiments investigated infants' sensitivity to large, approximate numerosities in auditory sequences. Prior studies provided evidence that 6-month-old infants discriminate large numerosities that differ by a ratio of 2.0, but not 1.5, when presented with arrays of visual forms in which many continuous variables are controlled. The present studies used a head-turn preference procedure to test for infants' numerosity discrimination with auditory sequences designed to control for element duration, sequence duration, interelement interval, and amount of acoustic energy. Six-month-old infants discriminated 16 from 8 sounds but failed to discriminate 12 from 8 sounds, providing evidence that the same 2.0 ratio limits numerosity discrimination in auditory-temporal sequences and visual-spatial arrays. Nine-month-old infants, in contrast, successfully discriminated 12 from 8 sounds, but not 10 from 8 sounds, providing evidence that numerosity discrimination increases in precision over development, prior to the emergence of language or symbolic counting.
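The ratio limit described in this abstract is a simple Weber-ratio criterion and can be expressed directly. A minimal sketch — the function names and the way the limit is passed as a parameter are ours, for illustration only:

```python
def weber_ratio(n1, n2):
    """Ratio of the larger to the smaller numerosity."""
    return max(n1, n2) / min(n1, n2)

def discriminable(n1, n2, ratio_limit):
    """True if the pair's ratio meets or exceeds the infant's limit."""
    return weber_ratio(n1, n2) >= ratio_limit

# 6-month-olds (limit ~2.0): 16 vs. 8 succeeds (ratio 2.0),
#                            12 vs. 8 fails (ratio 1.5).
# 9-month-olds (limit ~1.5): 12 vs. 8 succeeds, 10 vs. 8 fails (ratio 1.25).
```

The developmental result is then just a tightening of `ratio_limit` between 6 and 9 months, with the same comparison mechanism operating in both visual-spatial and auditory-temporal formats.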

14.
Discriminating personally significant from nonsignificant sounds is of high behavioral relevance and appears to be performed effortlessly outside of the focus of attention. Although there is no doubt that we automatically monitor our auditory environment for unexpected, and hence potentially significant, events, the characteristics of detection mechanisms based on individual memory schemata have been far less explored. The experiments in the present study were designed to measure event-related potentials (ERPs) sensitive to the discrimination of personally significant and nonsignificant nonlinguistic sounds. Participants were presented with random sequences of acoustically variable sounds, one of which was associated with personal significance for each of the participants. In Experiment 1, each participant’s own mobile SMS ringtone served as his or her significant sound. In Experiment 2, a nonsignificant sound was instead trained to become personally significant to each participant over a period of one month. ERPs revealed differential processing of personally significant and nonsignificant sounds from about 200 ms after stimulus onset, even when the sounds were task-irrelevant. We propose the existence of a mechanism for the detection of significant sounds that does not rely on the detection of acoustic deviation. From a comparison of the results from our active- and passive-listening conditions, this discriminative process based on individual memory schemata seems to be obligatory, whereas the impact of individual memory schemata on further stages of auditory processing may require top-down guidance.

15.
Operant conditioning and multidimensional scaling procedures were used to study auditory perception of complex sounds in the budgerigar. In a same-different discrimination task, budgerigars learned to discriminate among natural vocal signals. Multidimensional scaling procedures were used to arrange these complex acoustic stimuli in a two-dimensional space reflecting perceptual organization. Results show that budgerigars group vocal stimuli according to functional and acoustical categories. Studies with only contact calls show that birds also make within-category discriminations. The acoustic cues in contact calls most salient to budgerigars appear to be quite complex. There is a suggestion that the sex of the signaler may also be encoded in these calls. The results from budgerigars were compared with the results from humans tested on some of the same sets of complex sounds.

16.
Learning and Motivation, 1987, 18(3), 235–260
Pigeons were trained on a successive discrimination task using complex visual stimuli. In Experiment 1, each photographic slide that contained a person had a corresponding “matched background” slide, one that showed the same scene with the person removed. Birds trained on a human positive discrimination acquired the matched pairs problem, but birds trained on a human negative discrimination performed poorly. This suggests a feature-positive effect for complex stimulus categories. Memorization control groups that were trained on a human-irrelevant discrimination also performed poorly with matched slides. However, subsequent experiments demonstrated that these effects depended on the use of matched pairs of slides. The human-as-feature effect was not obtained when human positive and human negative groups were subsequently trained with nonmatched slides (Experiment 2), and memorization control groups acquired a human-irrelevant discrimination when trained with nonmatched slides (Experiment 3). Additional tests conducted in Experiments 2 and 3 found that performance was not disrupted when either the reinforced or nonreinforced slides were replaced. This effect was obtained when the category was relevant to the discrimination (Experiment 2) and when the category was irrelevant to the discrimination (Experiment 3). Finally, Experiment 4 demonstrated that memorization of a set of slides is possible when slides are sufficiently dissimilar (i.e., nonmatched), but performance is not as good when the category exemplars are irrelevant to the discrimination.

17.
Six experiments investigated the locus of the recency effect in immediate serial recall. Previous research has shown much larger recency for speech as compared to non-speech sounds. We compared two hypotheses: (1) speech sounds are processed differently from non-speech sounds (e.g. Liberman & Mattingly, 1985); and (2) speech sounds are more familiar and more discriminable than non-speech sounds (e.g. Nairne, 1988, 1990). In Experiments 1 and 2 we determined that merely varying the label given to the sets of stimuli (speech or non-speech) had no effect on recency or overall recall. We varied the familiarity of the stimuli by using highly trained musicians as subjects (Experiments 3 and 4) and by instructing subjects to attend to an unpracticed dimension of speech (Experiment 6). Discriminability was manipulated by varying the acoustic complexity of the stimuli (Experiments 3, 5, and 6) or the pitch distance between the stimuli (Experiment 4). Although manipulations of discriminability and familiarity affected overall level of recall greatly, in no case did discriminability or familiarity alone significantly enhance recency. What seems to make a difference in the occurrence of convincing recency is whether the items being remembered are undegraded speech sounds.

18.
The speed of naming the color of a colored square was examined with acoustic distraction to study the effects of the formation of a mental representation (neural model) of distractors. In Experiment 1, practice with the distractors (color words, noncolor words, and tones) was examined, and in Experiments 2 and 3 each color-naming trial was preceded by preexposures to sounds that could be dissimilar, similar, or identical to the upcoming auditory distractor. Consistency in the identity of ignored sounds, whether during color-naming practice or between preexposures and test, reduced interference with color naming. Consistency in voice played no role, and attended preexposures were ineffective in reducing interference. Given these results, the authors propose that mental representations of distractors include information about their task relevance, which modulates disruption of the primary task.

19.
To test for possible functional referentiality in a common domestic cat (Felis catus) vocalization, the authors conducted 2 experiments to examine whether human participants could classify meow sounds recorded from 12 different cats in 5 behavioral contexts. In Experiment 1, participants heard single calls, whereas in Experiment 2, bouts of calls were presented. In both cases, classification accuracy was significantly above chance, but modestly so. Accuracy for bouts exceeded that for single calls. Overall, participants performed better in classifying individual calls if they had lived with, interacted with, and had a general affinity for cats. These results provide little evidence of referentiality, suggesting instead that meows are nonspecific, somewhat negatively toned stimuli that attract attention from humans. With experience, human listeners can become more proficient at inferring positive-affect states from cat meows.

20.
Three experiments are reported that collectively show that listeners perceive speech sounds as contrasting auditorily with neighboring sounds. Experiment 1 replicates the well-established finding that listeners categorize more of a [d–g] continuum as [g] after [l] than after [r]. Experiments 2 and 3 show that listeners discriminate stimuli in which the energy concentrations differ in frequency between the spectra of neighboring sounds better than those in which they do not differ. In Experiment 2, [alga–arda] pairs, in which the energy concentrations in the liquid-stop sequences are H(igh) L(ow)–LH, were more discriminable than [alda–arga] pairs, in which they are HH–LL. In Experiment 3, [da] and [ga] syllables were more easily discriminated when they were preceded by lower and higher pure tones, respectively—that is, tones that differed from the stops’ higher and lower F3 onset frequencies—than when they were preceded by H and L pure tones with similar frequencies. These discrimination results show that contrast with the target’s context exaggerates its perceived value when energy concentrations differ in frequency between the target’s spectrum and its context’s spectrum. Because contrast with its context does more than merely shift the criterion for categorizing the target, it cannot be produced by neural adaptation. The finding that nonspeech contexts exaggerate the perceived values of speech targets also rules out compensation for coarticulation by showing that their values depend on the proximal auditory qualities evoked by the stimuli’s acoustic properties, rather than the distal articulatory gestures.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号