Similar Literature

20 similar records found.
1.
Similar to certain bats and dolphins, some blind humans can use sound echoes to perceive their silent surroundings. By producing an auditory signal (e.g., a tongue click) and listening to the returning echoes, these individuals can obtain information about their environment, such as the size, distance, and density of objects. Past research has also hinted that blind individuals may be able to use echolocation to gather information about 2-D surface shape, but the evidence to date has been inconclusive. Thus, here we investigated people’s ability to use echolocation to identify the 2-D shape (contour) of objects. We also investigated the role played by head movements—that is, exploratory movements of the head while echolocating—because anecdotal evidence suggests that head movements might be beneficial for shape identification. To this end, we compared the performance of six expert echolocators to that of ten blind nonecholocators and ten blindfolded sighted controls in a shape identification task, with and without head movements. We found that the expert echolocators could use echoes to determine the shapes of the objects with exceptional accuracy when they were allowed to make head movements, but that their performance dropped to chance level when they had to remain still. Neither blind nor blindfolded sighted controls performed above chance, regardless of head movements. Our results show not only that experts can use echolocation to successfully identify 2-D shape, but also that head movements made while echolocating are necessary for the correct identification of 2-D shape.

2.
Suprasegmental acoustic patterns in speech can convey meaningful information and affect listeners' interpretation in various ways, including through systematic analog mapping of message-relevant information onto prosody. We examined whether the effect of analog acoustic variation is governed by the acoustic properties themselves. For example, fast speech may always prime the concept of speed or a faster response. Alternatively, the effect may be modulated by the context-dependent interpretation of those properties; the effect of rate may depend on how listeners construe its meaning in the immediate linguistic or communicative context. In two experiments, participants read short scenarios that implied, or did not imply, urgency. Scenarios were followed by recorded instructions, spoken at varying rates. The results show that speech rate had an effect on listeners' response speed; however, this effect was modulated by discourse context. Speech rate affected response speed following contexts that emphasized speed, but not without such contextual information.

3.
A wide variety of organisms produce actions and signals in particular temporal sequences, including the motor actions recruited during tool-mediated foraging, the arrangement of notes in the songs of birds, whales, and gibbons, and the patterning of words in human speech. To accurately reproduce such events, the elements that comprise such sequences must be memorized. Both memory and artificial-language-learning studies have revealed at least two mechanisms for memorizing sequences, one tracking co-occurrence statistics among items in sequences (i.e., transitional probabilities) and the other tracking the positions of items in sequences, in particular those of items at sequence edges. The latter mechanism seems to dominate the encoding of sequences after limited exposure, and to be recruited by a wide array of grammatical phenomena. To assess whether humans differ from other species in their reliance on one mechanism over the other after limited exposure, we presented chimpanzees (Pan troglodytes) and human adults with brief exposure to six-item auditory sequences. Each sequence consisted of three distinct sound types (X, A, B), arranged according to two simple temporal rules: the A item always preceded the B item, and the sequence edges were always occupied by the X item. In line with previous results with human adults, both species primarily encoded positional information from the sequences; that is, they kept track of the items that occurred at the sequence edges. In contrast, sensitivity to co-occurrence statistics was much weaker. Our results suggest that a mechanism to spontaneously encode positional information from sequences is present in both chimpanzees and humans and may represent the default in the absence of training and with brief exposure. As many grammatical regularities exhibit properties of this mechanism, it may be recruited by language and constrain the form that certain grammatical regularities take.
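The two mechanisms contrasted above are easy to make concrete. Below is a minimal Python sketch (not the study's analysis code) that extracts both kinds of information from a hypothetical exposure corpus built on the X…A…B…X template: transitional probabilities between adjacent items, and the identity of the items occupying the sequence edges.

```python
# A minimal sketch (not the study's analysis code) of the two mechanisms:
# co-occurrence statistics (transitional probabilities) vs. positional
# (edge) encoding. The corpus below is hypothetical.
from collections import Counter, defaultdict

def transitional_probabilities(sequences):
    """Estimate P(next | current) from a corpus of item sequences."""
    pair_counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            pair_counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(cnt.values()) for nxt, n in cnt.items()}
            for cur, cnt in pair_counts.items()}

def edge_items(sequences):
    """Count which items occupy the sequence edges (first and last slots)."""
    return Counter((seq[0], seq[-1]) for seq in sequences)

# Hypothetical six-item sequences: A always precedes B, X occupies the edges.
corpus = [list("XAABBX"), list("XABBBX"), list("XAAABX")]

print(transitional_probabilities(corpus))  # e.g., P(B | A), P(X | B)
print(edge_items(corpus))                  # every sequence is X-edged
```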

4.
Over the past 30 years, hemispheric asymmetries in speech perception have been construed within a domain-general framework, according to which preferential processing of speech is due to left-lateralized, non-linguistic acoustic sensitivities. A prominent version of this argument holds that the left temporal lobe selectively processes rapid/temporal information in sound. Acoustically, this is a poor characterization of speech, and there has been little empirical support for a left-hemisphere selectivity for these cues. In sharp contrast, the right temporal lobe is demonstrably sensitive to specific acoustic properties. We suggest that acoustic accounts of speech sensitivities need to be informed by the nature of the speech signal and that a simple domain-general vs. domain-specific dichotomy may be incorrect.

5.
Object constancy, the ability to recognize objects despite changes in orientation, has not been well studied in the auditory modality. Dolphins use echolocation for object recognition, and objects ensonified by dolphins produce echoes that can vary significantly as a function of orientation. In this experiment, human listeners had to classify echoes from objects varying in material, shape, and size that were ensonified with dolphin signals. Participants were trained to discriminate among the objects using an 18-echo stimulus from a 10° range of aspect angles, then tested with novel aspect angles across a 60° range. Participants typically recognized the objects successfully at all angles (M = 78%). Artificial neural networks were trained and tested with the same stimuli with the purpose of identifying acoustic cues that enable object recognition. A multilayer perceptron performed similarly to the humans and revealed that recognition was enabled by both the amplitude and frequency of echoes, as well as by the temporal dynamics of these features over the course of echo trains. These results provide insight into representational processes underlying echoic recognition in dolphins and suggest that object constancy perceived through the auditory modality is likely to parallel what has been found in the visual domain in studies with both humans and animals.
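As a rough illustration of the modelling step described above, the following sketch trains scikit-learn's MLPClassifier on synthetic echo trains. The feature layout (per-echo amplitude and peak frequency across an 18-echo train) and the data themselves are assumptions for illustration, not the study's actual stimuli or pipeline.

```python
# A sketch with synthetic data: classify objects from echo-train features
# using a multilayer perceptron. The feature layout (18 echoes x {amplitude,
# peak frequency}) is an assumption for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_objects, echoes_per_train = 3, 18

def synth_train(obj):
    """Fake echo train: per-echo amplitude and peak frequency, object-dependent."""
    amp = 1.0 + 0.2 * obj + 0.05 * rng.standard_normal(echoes_per_train)
    freq = 40.0 + 5.0 * obj + rng.standard_normal(echoes_per_train)  # kHz
    return np.concatenate([amp, freq])

X = np.array([synth_train(obj) for obj in range(n_objects) for _ in range(50)])
y = np.repeat(np.arange(n_objects), 50)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```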

6.
In the last two decades, the integrative role of the frontal premotor cortex (a mosaic of agranular/dysgranular areas lying in front of the primary motor cortex) has been increasingly elucidated. Among its various functions, sensorimotor transformation and the storage of action representations (including for purposes that are not strictly motor) are the most intriguing properties of this region, as several studies have shown. In this article we will mainly focus on the ventro-rostral part of the monkey premotor cortex (area F5), in which visual information describing objects and others' acting hands is associated with goal-directed motor representations of hand movements. We will describe the main characteristics of F5 premotor neurons, and we will provide evidence in favor of a parallelism between monkeys and humans on the basis of new experimental observations. Finally, we will present some data indicating that, both in humans and in monkeys, action-related sensorimotor transformations are not restricted to visual information but also concern acoustic information.

7.
Perceptual theories must explain how perceivers extract meaningful information from a continuously variable physical signal. In the case of speech, the puzzle is that little reliable acoustic invariance seems to exist. We tested the hypothesis that speech-perception processes recover invariants not about the signal, but rather about the source that produced the signal. Findings from two manipulations suggest that the system learns those properties of speech that result from idiosyncratic characteristics of the speaker; the same properties are not learned when they can be attributed to incidental factors. We also found evidence for how the system determines what is characteristic: In the absence of other information about the speaker, the system relies on episodic order, representing those properties present during early experience as characteristic of the speaker. This "first-impressions" bias can be overridden, however, when variation is an incidental consequence of a temporary state (a pen in the speaker's mouth), rather than characteristic of the speaker.

8.
It is widely held that children's linguistic input underdetermines the correct grammar, and that language learning must therefore be guided by innate linguistic constraints. Here, we show that a Bayesian model can learn a standard poverty-of-stimulus example, anaphoric one, from realistic input by relying on indirect evidence, without a linguistic constraint assumed to be necessary. Our demonstration does, however, assume other linguistic knowledge; thus, we reduce the problem of learning anaphoric one to that of learning this other knowledge. We discuss whether this other knowledge may itself be acquired without linguistic constraints.
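The indirect-evidence logic sketched above can be illustrated with a toy Bayesian size-principle calculation: a more restrictive hypothesis gains posterior probability simply because every new observation happens to be consistent with it. The numbers below are illustrative and are not taken from the paper.

```python
# Toy size-principle calculation (illustrative numbers, not from the paper):
# a restrictive hypothesis wins as consistent observations accumulate.
def posterior(prior_small, n_obs, size_small, size_large):
    """Posterior for the restrictive hypothesis after n_obs observations,
    each assumed sampled uniformly from a hypothesis's extension."""
    like_small = (1.0 / size_small) ** n_obs
    like_large = (1.0 / size_large) ** n_obs
    num = prior_small * like_small
    return num / (num + (1 - prior_small) * like_large)

for n in [0, 1, 5, 10]:
    p = posterior(prior_small=0.5, n_obs=n, size_small=10, size_large=100)
    print(f"after {n:2d} observations: P(restrictive) = {p:.4f}")
```

With no data the two hypotheses are tied; after a handful of observations the restrictive one dominates, which is the shape of the indirect-evidence argument.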

9.
Blind persons emit sounds to detect objects by echolocation. Both the perceived pitch and the perceived loudness of the emitted sound change as it fuses with the reflections from nearby objects. Blind persons are generally better than sighted persons at echolocation, but it is unclear whether this superiority is related to the detection of pitch, loudness, or both. We measured the ability of twelve blind and twenty-five sighted listeners to determine which of two sounds, 500-ms noise bursts, had been recorded in the presence of a reflecting object in a room with reflecting walls, using an artificial head. The sound pairs were original recordings differing in both pitch and loudness, or manipulated recordings with either the pitch or the loudness information removed. Observers responded using a 2AFC method with verbal feedback. For both blind and sighted listeners, performance declined more when the pitch information was removed than when the loudness information was removed. In addition, the blind listeners performed clearly better than the sighted listeners as long as the pitch information was present, but not when it was removed. Taken together, these results show that the ability to detect pitch is a main factor underlying high performance in human echolocation.

10.
Society's increasing reliance on robots in everyday life provides exciting opportunities for social psychologists to work with engineers in the nascent field of social robotics. In contrast to industrial robots that, for example, may be used on an assembly line, social robots are designed specifically to interact with humans and/or other robots. People tend to perceive social robots as autonomous and capable of having a mind. As such, they are also more likely to be subject to social categorization by humans. As social robots become more human-like, people may also feel greater empathy for them and treat robots more like (human) ingroup members. On the other hand, as they become more human-like, robots also challenge our human distinctiveness, threaten our identity, and elicit suspicion about their ability to deceive us with their human-like qualities. We review relevant research to explore this apparent paradox, particularly from an intergroup relations perspective. We discuss these findings and propose three research questions that we believe social psychologists are ideally suited to address.

11.
The perceptual brain is designed around multisensory input. Areas once thought dedicated to a single sense are now known to work with multiple senses. It has been argued that the multisensory nature of the brain reflects a cortical architecture for which task, rather than sensory system, is the primary design principle. This supramodal thesis is supported by recent research on human echolocation and multisensory speech perception. In this review, we discuss the behavioural implications of a supramodal architecture, especially as they pertain to auditory perception. We suggest that the architecture implies a degree of perceptual parity between the senses and that cross-sensory integration occurs early and completely. We also argue that a supramodal architecture implies that perceptual experience can be shared across modalities and that this sharing should occur even without bimodal experience. We finish by briefly suggesting areas of future research.

12.
Body size is an important feature that affects fighting ability; however, size-related parameters of agonistic vocalizations are difficult to manipulate because of anatomical constraints within the vocal production system. Rare examples of acoustic size modulation are due to specific features that enable the sender to steadily communicate exaggerated body size. However, one could argue that it would be more adaptive if senders could adjust their signaling behavior to the fighting potential of their actual opponent. So far there has been no experimental evidence for this possibility. We tested this hypothesis by exposing family dogs (Canis familiaris) to humans with potentially different fighting ability. In a within-subject experiment, 64 dogs of various breeds consecutively faced two threateningly approaching humans, either two men or two women of different stature, or a man and a woman of similar or different stature. We found that the dogs’ vocal responses were affected by the gender of the threatening stranger and by the dog owner’s gender. Dogs with a female owner, or dogs from a household where both genders were present, reacted to threatening men with growls having lower values on the Pitch–Formant component (including a deeper fundamental frequency and lower formant dispersion). Our results are the first to show that non-human animals react with dynamic alteration of acoustic parameters related to their individual indexical features (body size), depending on the level of threat in an agonistic encounter.
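One of the parameters mentioned above, fundamental frequency, can be estimated from a recording in a few lines. The sketch below uses a crude autocorrelation method on a synthetic 90-Hz "growl"; it is an assumed approach for illustration, not the study's acoustic pipeline.

```python
# Crude autocorrelation-based f0 estimation (an assumed illustrative method,
# not the study's pipeline), applied to a synthetic low-pitched "growl".
import numpy as np

def estimate_f0(x, fs, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) from the autocorrelation peak."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # nonnegative lags
    lo, hi = int(fs / fmax), int(fs / fmin)            # plausible lag range
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# Fake growl: 90 Hz fundamental plus its second harmonic.
growl = np.sin(2 * np.pi * 90 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
print(estimate_f0(growl, fs))  # ~90 Hz
```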

13.
It is generally believed that concepts can be characterized by their properties (or features). When investigating concepts encoded in language, researchers often ask subjects to produce lists of properties that describe them (i.e., the Property Listing Task, PLT). These lists are accumulated to produce Conceptual Property Norms (CPNs). CPNs contain frequency distributions of properties for individual concepts. It is widely believed that these distributions represent the underlying semantic structure of those concepts. Here, instead of focusing on the underlying semantic structure, we aim at characterizing the PLT. An often-disregarded aspect of the PLT is that individuals show intersubject variability (i.e., they produce only partially overlapping lists). In our study we use a mathematical analysis of this intersubject variability to guide our inquiry. To this end, we resort to a set of publicly available norms that report the specific properties listed at the individual-subject level. Our results suggest that when an individual performs the PLT, he or she generates a list that mixes general and distinctive properties, with a non-linear tendency to produce more general than distinctive properties. Furthermore, the low-generality properties are precisely those that tend not to be repeated across lists, accounting in this manner for part of the intersubject variability. In consequence, any manipulation that affects the mixture of general and distinctive properties in lists is bound to change intersubject variability. We discuss why these results are important for researchers using the PLT.
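To make the variability measures concrete, here is a minimal sketch (with made-up property lists for a single concept) of two quantities discussed above: the CPN-style property frequency distribution and the pairwise overlap between individual subjects' lists.

```python
# Minimal sketch with made-up PLT responses for one concept: build the
# CPN-style frequency distribution and measure intersubject list overlap.
from collections import Counter
from itertools import combinations

lists = [  # hypothetical PLT responses for the concept "dog"
    {"barks", "four legs", "pet", "furry"},
    {"barks", "four legs", "loyal"},
    {"pet", "four legs", "chases cats"},
]

# CPN-style frequency distribution: general properties repeat across lists.
freq = Counter(p for lst in lists for p in lst)
print(freq.most_common())

def jaccard(a, b):
    """Overlap between two subjects' property lists."""
    return len(a & b) / len(a | b)

overlaps = [jaccard(a, b) for a, b in combinations(lists, 2)]
print("mean pairwise overlap:", sum(overlaps) / len(overlaps))
```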

14.
Panksepp, J. (2003). Psychological Review, 110(2), 376–388; discussion 389–396.
M. S. Blumberg and G. Sokoloff's (2001) critical analysis has raised doubts about whether emotional feelings can be studied in nonhuman animals, and they have reaffirmed the inappropriateness of anthropomorphic reasoning in animal research. They argue that the ultrasonic distress calls of infant rats may be little more than acoustic by-products of bodily adjustments to physiological stressors. This author argues that comparable vocalizations in other species do index separation distress. Considering that there may be deep homologies in the neural systems that govern such emotional processes in many mammalian species, anthropomorphic-zoomorphic reasoning may be a viable cross-species research strategy as long as it is limited to neuroscientific contexts that lead to testable predictions in humans and other animals.

15.
Numerosity judgments of small sets of items (≤ 3) are generally fast and error free, while response times and error rates increase rapidly for larger numbers of items. We investigated an efficient process used for judging small numbers of items (known as subitizing) in active touch. We hypothesized that this efficient process for numerosity judgment might be related to stimulus properties that allow for efficient (parallel) search. Our results showed that subitizing was not possible for raised lines among flat surfaces, whereas this type of stimulus could be detected in parallel over the fingers. However, subitizing was possible when the number of fingers touching a surface had to be judged while the other fingers were lowered in mid-air. In the latter case, the lack of tactile input is essential, since subitizing was not enabled by differences in proprioceptive information from the fingers. Our results show that subitizing using haptic information from the fingers is possible only when some fingers receive tactile information while other fingers do not.

16.
Fifteen-month-olds have difficulty detecting differences between novel words differing in a single vowel. Previous work showed that Australian English (AusE) infants habituated to the word-object pair DEET detected an auditory switch to DIT and DOOT in Canadian English (CanE) but not in their native AusE (Escudero et al., 2014). The authors speculated that this may be because the vowel inherent spectral change (VISC) in AusE DEET is larger than in CanE DEET. We investigated whether VISC leads to difficulty in encoding phonetic detail during early word learning, and whether this difficulty dissipates with age. In Experiment 1, we familiarized AusE-learning 15-month-olds to AusE DIT, which contains smaller VISC than AusE DEET. Unlike infants familiarized with AusE DEET (Escudero et al., 2014), infants detected a switch to DEET and DOOT. In Experiment 2, we familiarized AusE-learning 17-month-olds to AusE DEET. This time, infants detected a switch to DOOT, and marginally detected a switch to DIT. Our acoustic analysis showed that AusE DEET and DOOT are differentiated by the second vowel formant, while DEET and DIT can only be distinguished by their changing dynamic properties throughout the vowel trajectory. Thus, by 17 months, AusE infants can encode highly dynamic acoustic properties, enabling them to learn novel vowel minimal pairs that are difficult at 15 months. These findings suggest that the development of word learning is shaped by the phonetic properties of the specific word minimal pair.

17.
Over 30 years ago, it was suggested that difficulties in the ‘auditory organization’ of word forms in the mental lexicon might cause reading difficulties. It was proposed that children used parameters such as rhyme and alliteration to organize word forms in the mental lexicon by acoustic similarity, and that such organization was impaired in developmental dyslexia. This literature was based on an ‘oddity’ measure of children's sensitivity to rhyme (e.g. wood, book, good) and alliteration (e.g. sun, sock, rag). The ‘oddity’ task revealed that children with dyslexia were significantly poorer at identifying the ‘odd word out’ than younger children without reading difficulties. Here we apply a novel modelling approach drawn from auditory neuroscience to study the possible sensory basis of the auditory organization of rhyming and non-rhyming words by children. We utilize a novel Spectral-Amplitude Modulation Phase Hierarchy (S-AMPH) approach to analysing the spectro-temporal structure of rhyming and non-rhyming words, aiming to illuminate the potential acoustic cues used by children as a basis for phonological organization. The S-AMPH model assumes that speech encoding depends on neuronal oscillatory entrainment to the amplitude modulation (AM) hierarchy in speech. Our results suggest that phonological similarity between rhyming words in the oddity task depends crucially on slow (delta band) modulations in the speech envelope. Contrary to linguistic assumptions, therefore, auditory organization by children may not depend on phonemic information for this task. Linguistically, it is assumed that ‘book’ does not rhyme with ‘wood’ and ‘good’ because the final phoneme differs. However, our auditory analysis suggests that the acoustic cues to this phonological dissimilarity depend primarily on the slower amplitude modulations in the speech envelope, thought to carry prosodic information. Therefore, the oddity task may help in detecting reading difficulties because phonological similarity judgements about rhyme reflect sensitivity to slow amplitude modulation patterns. Slower amplitude modulations are known to be detected less efficiently by children with dyslexia.
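One ingredient of this kind of analysis, isolating the slow delta-band modulations of a signal's amplitude envelope, can be sketched as follows. The band edges (0.9-2.5 Hz) are an assumption loosely based on the S-AMPH literature rather than taken from this paper, and the "speech" is a synthetic stand-in.

```python
# Sketch: extract a signal's amplitude envelope and isolate its slow
# delta-band modulations. Band edges (0.9-2.5 Hz) are an assumption;
# the "speech" below is a synthetic stand-in.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
# Stand-in "speech": a 500 Hz carrier modulated at ~2 Hz, plus noise.
signal = (1 + 0.8 * np.sin(2 * np.pi * 2.0 * t)) * np.sin(2 * np.pi * 500 * t)
signal += 0.05 * np.random.default_rng(0).standard_normal(t.size)

envelope = np.abs(hilbert(signal))            # broadband amplitude envelope
b, a = butter(2, [0.9, 2.5], btype="bandpass", fs=fs)
delta_env = filtfilt(b, a, envelope)          # slow (delta-band) modulations

print("delta-band envelope RMS:", np.sqrt(np.mean(delta_env ** 2)))
```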

18.
The standard methods for decomposition and analysis of evoked potentials are bandpass filtering, identification of peak amplitudes and latencies, and principal component analysis (PCA). We discuss the limitations of these and other approaches and introduce wavelet packet analysis. Then we propose the "single-channel wavelet packet model," a new approach in which a unique decomposition is achieved using prior time-frequency information and differences in the responses of the components to changes in experimental conditions. Orthogonal sets of wavelet packets allow a parsimonious time-frequency representation of the components. The method allows energy in some wavelet packets to be shared among two or more components, so the components are not necessarily orthogonal. The single-channel wavelet packet model and PCA both require constraints to achieve a unique decomposition. In PCA, however, the constraints are defined by mathematical convenience and may be unrealistic. In the single-channel wavelet packet model, the constraints are based on prior scientific knowledge. We give an application of the method to auditory evoked potentials recorded from cats. The good frequency resolution of wavelet packets allows us to separate superimposed components in these data. Our present approach yields estimates of component waveforms and the effects of experiment conditions on the amplitude of the components. We discuss future extensions that will provide confidence intervals and p values, allow for latency changes, and represent multichannel data.
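For readers unfamiliar with wavelet packets, the sketch below decomposes a simulated evoked potential with PyWavelets and reports the energy in each frequency-ordered packet. It illustrates the time-frequency representation discussed above; it is not an implementation of the authors' single-channel wavelet packet model.

```python
# Wavelet packet decomposition of a simulated evoked potential (PyWavelets).
# Illustrative only; not the authors' single-channel wavelet packet model.
import numpy as np
import pywt

fs = 1000
t = np.arange(0, 0.512, 1 / fs)  # 512-sample epoch
# Simulated evoked potential: two transient oscillatory components + noise.
ep = (np.exp(-((t - 0.1) / 0.02) ** 2) * np.sin(2 * np.pi * 40 * t)
      + np.exp(-((t - 0.3) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * t)
      + 0.1 * np.random.default_rng(1).standard_normal(t.size))

wp = pywt.WaveletPacket(data=ep, wavelet="db4", mode="symmetric", maxlevel=4)
for node in wp.get_level(4, order="freq"):   # frequency-ordered packets
    energy = np.sum(node.data ** 2)
    print(f"{node.path}: energy = {energy:.3f}")
```

Packets whose energy stands out against the noise floor localize the transient components in both time and frequency, which is the property the abstract's method exploits.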

19.
In defense of representation
The computational paradigm, which has dominated psychology and artificial intelligence since the cognitive revolution, has been a source of intense debate. Recently, several cognitive scientists have argued against this paradigm, not by objecting to computation, but rather by objecting to the notion of representation. Our analysis of these objections reveals that it is not the notion of representation per se that is causing the problem, but rather specific properties of representations as they are used in various psychological theories. Our analysis suggests that all theorists accept the idea that cognitive processing involves internal information-carrying states that mediate cognitive processing. These mediating states are a superordinate category of representations. We discuss five properties that can be added to mediating states and examine their importance in various cognitive models. Finally, three methodological lessons are drawn from our analysis and discussion.

20.
The functional role of correlations between neuronal spike trains remains strongly debated. This debate partly stems from the lack of a standardized analysis technique capable of accurately quantifying the role of correlations in stimulus encoding. We believe that information theoretic measures may represent an objective method for analysing the functional role of neuronal correlations. Here we show that information analysis of pairs of spike trains allows the information content present in the firing rate to be disambiguated from any extra information that may be present in the temporal relationships of the two spike trains. We validate and illustrate the method by applying it to simulated data with variable degrees of known synchrony, and by applying it to recordings from pairs of sites in the primary visual cortex of anaesthetized cats. We discuss the importance of information theoretic analysis in elucidating the neuronal mechanisms underlying object identification.
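The contrast the authors draw, rate information versus extra information in the temporal relationship between two spike trains, can be illustrated with a toy discrete mutual-information calculation. The data below are synthetic: a cell pair whose synchrony, but not mean firing rate, depends on the stimulus.

```python
# Toy illustration (synthetic data): mutual information carried by the joint
# response of two cells versus their summed (rate-like) response.
import numpy as np
from collections import Counter

def mutual_information(stimuli, responses):
    """I(S;R) in bits, estimated from discrete stimulus/response samples."""
    n = len(stimuli)
    ps, pr = Counter(stimuli), Counter(responses)
    pj = Counter(zip(stimuli, responses))
    return sum((c / n) * np.log2((c / n) / ((ps[s] / n) * (pr[r] / n)))
               for (s, r), c in pj.items())

rng = np.random.default_rng(0)
stim = rng.integers(0, 2, 5000)                  # two stimulus conditions
# Cell pair: mean rates matched across conditions, but the cells fire
# synchronously only under stimulus 1.
shared = rng.integers(0, 2, 5000)
c1 = np.where(stim == 1, shared, rng.integers(0, 2, 5000))
c2 = np.where(stim == 1, shared, rng.integers(0, 2, 5000))

print("joint response:", mutual_information(stim, list(zip(c1, c2))))
print("summed counts :", mutual_information(stim, list(c1 + c2)))
```

In this toy case the joint code carries more information than the summed count, mirroring the rate-versus-correlation disambiguation the abstract describes.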
